Integrated utilization of equations and small dataset in the Koopman operator: applications to forward and inverse problems

Ohta, Ichiro, Koyanagi, Shota, Kinjo, Kayo, Ohkubo, Jun

arXiv.org Artificial Intelligence

In recent years, there has been growing interest in data-driven approaches in physics, such as extended dynamic mode decomposition (EDMD). The EDMD algorithm targets nonlinear time-evolution systems, and the constructed Koopman matrix yields the next-time prediction with only linear matrix-product operations. Data-driven approaches generally require a large dataset; however, if one has some prior knowledge, even ambiguous prior knowledge, one can achieve sufficient learning from only a small dataset by taking advantage of it. This paper presents methods for incorporating such ambiguous prior knowledge into the EDMD algorithm; here, the ambiguous prior knowledge corresponds to the underlying time-evolution equations with unknown parameters. First, we apply the proposed method to forward problems, i.e., prediction tasks. Second, we propose a scheme for applying it to inverse problems, i.e., parameter estimation tasks. We demonstrate learning from only a small dataset using two guiding examples, the Duffing and van der Pol systems.
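The EDMD mechanics the abstract refers to — lift states with a dictionary of observables, fit the Koopman matrix by least squares, then predict with a single matrix product — can be sketched as follows. The monomial dictionary and toy dynamics are illustrative choices, not the paper's:

```python
import numpy as np

def dictionary(x, degree=3):
    # psi(x) = [1, x, x^2, x^3]: a simple illustrative choice
    return np.vander(x, degree + 1, increasing=True)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)           # snapshots x_t
y = x - 0.1 * x**3                    # snapshots x_{t+1} (toy dynamics)

Psi_x, Psi_y = dictionary(x), dictionary(y)
K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)   # Psi_x @ K ~ Psi_y

x_new = np.array([0.5])
pred = dictionary(x_new) @ K          # one prediction step, purely linear algebra
print(pred[0, 1])                     # the state component: x_new - 0.1 * x_new**3
```

Because the state observable lies exactly in the span of this dictionary for the toy dynamics, the linear prediction of the state component is exact here; in general it is an approximation.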


Deep Koopman operator framework for causal discovery in nonlinear dynamical systems

Nathaniel, Juan, Roesch, Carla, Buch, Jatan, DeSantis, Derek, Rupe, Adam, Lamb, Kara, Gentine, Pierre

arXiv.org Artificial Intelligence

We use a deep Koopman operator-theoretic formalism to develop a novel causal discovery algorithm, Kausal. Causal discovery aims to identify cause-effect mechanisms for better scientific understanding, explainable decision-making, and more accurate modeling. Standard statistical frameworks, such as Granger causality, lack the ability to quantify causal relationships in nonlinear dynamics due to the presence of complex feedback mechanisms, timescale mixing, and nonstationarity. This presents a challenge in studying many real-world systems, such as the Earth's climate. Meanwhile, Koopman operator methods have emerged as a promising tool for approximating nonlinear dynamics in a linear space of observables. In Kausal, we propose to leverage this powerful idea for causal analysis, where optimal observables are inferred using deep learning. Causal estimates are then evaluated in a reproducing kernel Hilbert space and defined as the distance between the marginal dynamics of the effect and the joint dynamics of the cause-effect observables. Our numerical experiments demonstrate Kausal's superior ability in discovering and characterizing causal signals compared to existing approaches based on prescribed observables. Lastly, we extend our analysis to observations of the El Niño-Southern Oscillation, highlighting our algorithm's applicability to real-world phenomena. Our code is available at https://github.com/juannat7/kausal.
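The RKHS distance underlying such causal estimates can be illustrated with a maximum mean discrepancy (MMD) computation; this is a generic sketch of distances between kernel mean embeddings, not Kausal's exact estimator:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Squared MMD with a Gaussian kernel: the RKHS distance between
    # the empirical kernel mean embeddings of samples X and Y.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
same = mmd2(rng.normal(size=(300, 2)), rng.normal(size=(300, 2)))
shifted = mmd2(rng.normal(size=(300, 2)), rng.normal(2.0, 1.0, size=(300, 2)))
print(same < shifted)  # matched distributions score a smaller distance
```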


NN-ResDMD: Learning Koopman Representations for Complex Dynamics with Spectral Residuals

Xu, Yuanchao, Shao, Kaidi, Logothetis, Nikos, Shen, Zhongwei

arXiv.org Artificial Intelligence

Analyzing long-term behaviors in high-dimensional nonlinear dynamical systems remains a significant challenge. The Koopman operator framework has emerged as a powerful tool to address this issue by providing a globally linear perspective on nonlinear dynamics. However, existing methods for approximating the Koopman operator and its spectral components, particularly in large-scale systems, often lack robust theoretical guarantees. Residual Dynamic Mode Decomposition (ResDMD) introduces a spectral residual measure to assess the convergence of the estimated Koopman spectrum, which helps filter out spurious spectral components. Nevertheless, it depends on pre-computed spectra, thereby inheriting their inaccuracies. To overcome these limitations, we introduce Neural Network-ResDMD (NN-ResDMD), a method that directly estimates Koopman spectral components by minimizing the spectral residual. By leveraging neural networks, NN-ResDMD automatically identifies the optimal basis functions of the Koopman invariant subspace, eliminating the need for manual selection and improving the reliability of the analysis. Experiments on physical and biological systems demonstrate that NN-ResDMD significantly improves both accuracy and scalability, making it an effective tool for analyzing complex dynamical systems.
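A minimal sketch of the spectral residual that ResDMD-style methods minimize, computed from snapshot Gram matrices for a candidate eigenpair; toy linear dynamics and a monomial dictionary stand in for the learned basis:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 400)
y = 0.9 * x                                    # toy dynamics: x_{t+1} = 0.9 x_t
Psi = np.vander(x, 4, increasing=True)         # psi(x_t)
Psi2 = np.vander(y, 4, increasing=True)        # psi(x_{t+1})

M = len(x)
G, A, Lmat = Psi.T @ Psi / M, Psi.T @ Psi2 / M, Psi2.T @ Psi2 / M
K = np.linalg.solve(G, A)                      # EDMD matrix
lams, V = np.linalg.eig(K)

def residual(lam, g):
    # res^2 = g*(L - lam*A^T - conj(lam)*A + |lam|^2 G)g / (g* G g)
    num = g.conj() @ (Lmat - lam * A.T - np.conj(lam) * A + abs(lam) ** 2 * G) @ g
    den = g.conj() @ G @ g
    return float(np.sqrt(abs(num / den)))

res = [residual(lam, V[:, i]) for i, lam in enumerate(lams)]
print(max(res))  # near zero: every eigenpair is resolved exactly here
```

For this toy system the monomial dictionary spans an exact invariant subspace, so all residuals vanish; spurious spectral components would instead show large residuals.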


Extraction of nonlinearity in neural networks with Koopman operator

Sugishita, Naoki, Kinjo, Kayo, Ohkubo, Jun

arXiv.org Artificial Intelligence

Nonlinearity plays a crucial role in deep neural networks. In this paper, we investigate the degree to which the nonlinearity of a neural network is essential. For this purpose, we employ the Koopman operator, extended dynamic mode decomposition, and the tensor-train format. The Koopman operator approach has recently been developed in physics and the nonlinear sciences; the Koopman operator deals with the time evolution in the observable space instead of the state space. Since nonlinearity in the state space can be replaced with linearity in the observable space, the approach is a promising candidate for understanding complex behavior in nonlinear systems. Here, we analyze neural networks trained for classification problems. As a result, replacing the nonlinear middle layers with the Koopman matrix yields sufficient accuracy in numerical experiments. In addition, we confirm that pruning the Koopman matrix maintains accuracy even at high compression ratios. These results indicate the possibility of extracting features of neural networks with the Koopman operator approach.
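The pruning idea can be illustrated with simple magnitude pruning of a toy Koopman matrix; the matrix and threshold rule here are hypothetical, not taken from the paper:

```python
import numpy as np

def prune(K, ratio):
    # Zero out the smallest-magnitude fraction `ratio` of the entries.
    thresh = np.quantile(np.abs(K), ratio)
    return np.where(np.abs(K) >= thresh, K, 0.0)

rng = np.random.default_rng(3)
# Toy Koopman matrix: dominant diagonal plus weak couplings
K = np.diag([1.0, 0.9, 0.8, 0.7]) + 0.01 * rng.normal(size=(4, 4))

K_sparse = prune(K, 0.75)             # keep only ~25% of the entries
psi = rng.normal(size=4)              # a lifted state, for comparison
err = np.linalg.norm(psi @ K - psi @ K_sparse) / np.linalg.norm(psi @ K)
print(err)  # small relative prediction error despite heavy pruning
```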


Compression of the Koopman matrix for nonlinear physical models via hierarchical clustering

Nishikata, Tomoya, Ohkubo, Jun

arXiv.org Artificial Intelligence

Machine learning methods allow the prediction of nonlinear dynamical systems from data alone. One such approach is based on the Koopman operator, which enables us to employ linear analysis for nonlinear dynamical systems; its linear characteristics are promising for understanding nonlinear dynamics and for rapid prediction. The extended dynamic mode decomposition (EDMD) is one method to approximate the Koopman operator as a finite-dimensional matrix. In this work, we propose a method to compress the Koopman matrix using hierarchical clustering. Numerical demonstrations for the cart-pole model and comparisons with conventional singular value decomposition (SVD) show that hierarchical clustering performs better than naive SVD compression.
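A rough sketch of the compression idea, assuming rows of the Koopman matrix are grouped by Ward hierarchical clustering and each cluster shares one representative row; this is a simplification for illustration, not the paper's exact scheme:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(4)
base = rng.normal(size=(3, 8))
# Toy Koopman matrix: 9 rows that fall into ~3 nearly identical groups
K = np.repeat(base, 3, axis=0) + 0.01 * rng.normal(size=(9, 8))

Z = linkage(K, method="ward")                  # hierarchical clustering of rows
labels = fcluster(Z, t=3, criterion="maxclust")

# One representative (mean) row per cluster, then reconstruct the full matrix
K_compressed = np.vstack([K[labels == c].mean(axis=0) for c in sorted(set(labels))])
K_approx = K_compressed[labels - 1]
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(K_compressed.shape, err)  # 3 shared rows approximate all 9
```

Only the 3 representative rows and the 9 cluster labels need to be stored, instead of the full matrix.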


d554f7bb7be44a7267068a7df88ddd20-Reviews.html

Neural Information Processing Systems

Summary: The paper proposes a multivariate stochastic process for modeling time series which incorporates locally varying smoothness in the mean and in the covariance matrix. The process uses latent dictionary functions with nested Gaussian process priors; the dictionary functions are linearly related to the observations through a sparse mapping. The authors outline MCMC and online algorithms for approximate Bayesian inference and assess performances using simulation and processing of financial data. Quality: The paper extends the application of the nested Gaussian process priors in [23] to the multivariate case and employs them for both the mean and covariance. This constitutes a sensible extension, and the authors develop an effective inference algorithm.


Locally Adaptive Bayesian Multivariate Time Series

Neural Information Processing Systems

In modeling multivariate time series, it is important to allow time-varying smoothness in the mean and covariance process. In particular, there may be certain time intervals exhibiting rapid changes and others in which changes are slow. If such locally adaptive smoothness is not accounted for, one can obtain misleading inferences and predictions, with over-smoothing across erratic time intervals and under-smoothing across times exhibiting slow variation. This can lead to miscalibration of predictive intervals, which can be substantially too narrow or too wide depending on the time. We propose a continuous multivariate stochastic process for time series having locally varying smoothness in both the mean and covariance matrix. This process is constructed utilizing latent dictionary functions in time, which are given nested Gaussian process priors and linearly related to the observed data through a sparse mapping. Using a differential equation representation, we bypass the usual computational bottlenecks in obtaining MCMC and online algorithms for approximate Bayesian inference. The performance is assessed in simulations and illustrated in a financial application.
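The model's basic structure — latent dictionary functions with smooth priors, linearly mapped to observations through a sparse matrix — can be sketched generatively; a single fixed-smoothness Gaussian process prior is used here, and the nesting that produces locally adaptive smoothness is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)
# Squared-exponential covariance: one fixed smoothness level (no nesting)
cov = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.1**2))
psi = rng.multivariate_normal(np.zeros(100), cov + 1e-8 * np.eye(100), size=3)

Lam = rng.normal(size=(5, 3))
Lam[np.abs(Lam) < 0.8] = 0.0           # sparse mapping into 5 observed series
y = Lam @ psi + 0.05 * rng.normal(size=(5, 100))
print(y.shape)                          # 5 observed series over 100 time points
```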


Physics-informed invertible neural network for the Koopman operator learning

Meng, Yuhuang, Huang, Jianguo, Qiu, Yue

arXiv.org Artificial Intelligence

In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite-dimensional but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose FlowDMD, a Flow-based Dynamic Mode Decomposition that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.
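The invertibility that FlowDMD relies on comes from coupling layers. A minimal affine coupling layer can be sketched as follows, with fixed random affine maps standing in for the learned scale/shift networks:

```python
import numpy as np

rng = np.random.default_rng(6)
W_s, W_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

def forward(x):
    x1, x2 = x[:2], x[2:]
    s, t = np.tanh(W_s @ x1), W_t @ x1     # conditioned on the untouched half
    return np.concatenate([x1, x2 * np.exp(s) + t])

def inverse(z):
    z1, z2 = z[:2], z[2:]
    s, t = np.tanh(W_s @ z1), W_t @ z1     # recomputed from the same half
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])

x = rng.normal(size=4)
print(np.allclose(inverse(forward(x)), x))  # invertible by construction
```

Because the first half passes through unchanged, the inverse can recompute the exact scale and shift, so no information about the state is lost.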


Discovering interpretable Lagrangian of dynamical systems from data

Tripura, Tapas, Chakraborty, Souvik

arXiv.org Artificial Intelligence

A complete understanding of physical systems requires models that are accurate and obey natural conservation laws. Recent trends in representation learning involve learning the Lagrangian from data rather than the direct discovery of governing equations of motion. The generalization of equation discovery techniques has huge potential; however, existing Lagrangian discovery frameworks are black-box in nature, which raises concerns about the reusability of the discovered Lagrangian. In this article, we propose a novel data-driven machine-learning algorithm to automate the discovery of interpretable Lagrangians from data. The Lagrangians are derived in interpretable forms, which also allows the automated discovery of conservation laws and governing equations of motion. The architecture of the proposed framework is designed so that it allows learning the Lagrangian from a subset of the underlying domain and then generalizing to an infinite-dimensional system. The fidelity of the proposed framework is exemplified using examples described by systems of ordinary and partial differential equations for which the Lagrangian and conserved quantities are known.
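The spirit of interpretable Lagrangian discovery can be illustrated on a toy problem: posit a small candidate library for the Lagrangian, impose the Euler-Lagrange equation on trajectory data, and solve for the coefficients. This is a hand-rolled illustration, not the paper's algorithm:

```python
import numpy as np

# Candidate library: L(q, qdot) = a*qdot**2 + b*q**2
# Euler-Lagrange: d/dt(dL/dqdot) - dL/dq = 2*a*qddot - 2*b*q = 0
omega = 2.0
t = np.linspace(0, 5, 500)
q = np.cos(omega * t)                  # harmonic-oscillator trajectory
qddot = -omega**2 * q                  # analytic, avoiding numerical differentiation

# Fix a = 1 and solve qddot = b*q for b by least squares
b = (q @ qddot) / (q @ q)
print(round(b, 6))                     # recovers b = -omega**2
```

The recovered coefficients give the Lagrangian in closed form, from which the equation of motion and the conserved energy follow by symbolic manipulation.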


Dictionary-based Manifold Learning

Zhang, Hanyu, Koelle, Samson, Meila, Marina

arXiv.org Artificial Intelligence

We propose a paradigm for interpretable manifold learning for scientific data analysis, whereby we parametrize a manifold with $d$ smooth functions from a scientist-provided dictionary of meaningful, domain-related functions. When such a parametrization exists, we provide an algorithm for finding it based on sparse nonlinear regression in the manifold tangent bundle, bypassing more standard manifold learning algorithms. We also discuss conditions for the existence of such parametrizations in function space and for successful recovery from finite samples. We demonstrate our method with experimental results from a real scientific domain.
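A toy check in the spirit of this parametrization idea: on a circle, compare how each dictionary function varies along the manifold. A good global coordinate is one whose derivative stays bounded away from zero; the dictionary here is an assumed illustration, not from the paper:

```python
import numpy as np

s = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.c_[np.cos(s), np.sin(s)]              # points on the unit circle

dictionary = {
    "x": pts[:, 0],
    "y": pts[:, 1],
    "theta": np.unwrap(np.arctan2(pts[:, 1], pts[:, 0])),
}
for name, f in dictionary.items():
    df = np.gradient(f, s)                     # variation along the manifold
    print(name, round(float(np.abs(df).min()), 3))
```

The angle function parametrizes the whole circle (its derivative stays near 1), while the ambient coordinates x and y each degenerate where their derivative along the circle vanishes.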